A computer number format is the internal representation of numeric values in digital computer and calculator hardware and software. Normally, numeric values are stored as groupings of bits, named for the number of bits that compose them. The encoding between numerical values and bit patterns is chosen for convenience of the operation of the computer; the bit format used by the computer's instruction set generally requires conversion for external use, such as printing and display. Different types of processors may have different internal representations of numerical values, and different conventions are used for integer and real numbers. Most calculations are carried out with number formats that fit into a processor register, but some software systems allow representation of arbitrarily large numbers using multiple words of memory.

==Binary number representation==

Computers represent data in sets of binary digits. The representation is composed of bits, which in turn are grouped into larger sets such as bytes.

A ''bit'' is a binary digit that represents one of two states. The concept of a bit can be understood as a value of either ''1'' or ''0'', ''on'' or ''off'', ''yes'' or ''no'', ''true'' or ''false'', or encoded by a switch or toggle of some kind.

While a single bit, on its own, is able to represent only two values, a string of bits may be used to represent larger values. For example, a string of three bits can represent up to eight distinct values, as illustrated in Table 1.

{| class="wikitable"
|+ Table 1: The eight distinct values of a three-bit string
! Bit string !! Value
|-
| 000 || 0
|-
| 001 || 1
|-
| 010 || 2
|-
| 011 || 3
|-
| 100 || 4
|-
| 101 || 5
|-
| 110 || 6
|-
| 111 || 7
|}

As the number of bits composing a string increases, the number of possible ''0'' and ''1'' combinations increases exponentially: a single bit allows only two value combinations, two bits combined can make four separate values, and so on. The number of possible combinations doubles with each binary digit added, as illustrated in Table 2.

{| class="wikitable"
|+ Table 2: The number of combinations doubles with each bit added
! Number of bits !! Possible combinations
|-
| 1 || 2
|-
| 2 || 4
|-
| 3 || 8
|-
| 4 || 16
|-
| 5 || 32
|-
| 6 || 64
|-
| 7 || 128
|-
| 8 || 256
|}

Groupings with a specific number of bits are used to represent varying things and have specific names.

A ''byte'' is a bit string containing the number of bits needed to represent a character. On most modern computers, this is an eight-bit string. Because the definition of a byte is related to the number of bits composing a character, some older computers have used a different bit length for their byte.<ref>http://catb.org/~esr/jargon/html/B/byte.html</ref> In many computer architectures, the byte is used to address specific areas of memory. For example, even though 64-bit processors may address memory sixty-four bits at a time, they may still split that memory into eight-bit pieces. This is called byte-addressable memory. Historically, many CPUs read data in some multiple of eight bits.<ref>http://www.networkdictionary.com/hardware/mc.php</ref> Because the byte size of eight bits is so common, but the definition is not standardized, the term ''octet'' is sometimes used to explicitly describe an eight-bit sequence.

A ''nibble'' (sometimes ''nybble'') is a number composed of four bits.<ref>http://catb.org/~esr/jargon/html/N/nybble.html</ref> Being a half-byte, the nibble was named as a play on words: just as a person may take several nibbles to finish one bite of something, a nibble is a part of a byte. Because four bits allow for sixteen values, a nibble is sometimes known as a hexadecimal digit, since it holds exactly one.<ref>http://www.techterms.com/definition/nybble</ref>
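The counting rules above are easy to verify mechanically. The following is a minimal sketch in Python (the language and the helper name <code>combinations</code> are chosen here purely for illustration) that enumerates the eight values of a three-bit string and shows the doubling of combinations from Tables 1 and 2:

<syntaxhighlight lang="python">
def combinations(n_bits):
    """Number of distinct values an n-bit string can represent: 2**n."""
    return 2 ** n_bits

# Table 1: the eight distinct values of a three-bit string.
for value in range(combinations(3)):
    print(format(value, "03b"), "=", value)   # e.g. "101 = 5"

# Table 2: each added bit doubles the number of combinations.
for n in range(1, 9):
    print(n, "bit(s):", combinations(n), "combinations")
</syntaxhighlight>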
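Similarly, the correspondence between a byte, its two nibbles, and hexadecimal digits can be demonstrated with a short sketch; the byte value below is an arbitrary example:

<syntaxhighlight lang="python">
# Split a byte into its two nibbles; each nibble is one hexadecimal digit.
byte = 0b10110100         # an arbitrary eight-bit value, 0xB4
high = (byte >> 4) & 0xF  # upper four bits: 0b1011 = 0xB
low  = byte & 0xF         # lower four bits: 0b0100 = 0x4

print(format(byte, "02X"))                   # "B4"
print(format(high, "X"), format(low, "X"))   # "B 4"
</syntaxhighlight>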